
    On the Uncertainty of Archive Hydrographic Datasets

    As the international hydrographic community continues to address the question of the irreducible uncertainty in modern surveys, we must ask how we do the same with archived Vertical Beam Echosounder (VBES) and leadline datasets. The ONR-funded Strataform project surveyed an area of the New Jersey shelf around 39°12′N 72°50′W using an EM1000 Multibeam Echosounder (MBES). This area is also covered by NOAA surveys from 1936-38 (assumed to be leadline) and 1975-76 (VBES). By comparison of the archival soundings to the MBES data, estimates of measurement error for the archival surveys are constructed as a function of depth. The analysis shows that archival leadline smoothsheets are heavily biased in deeper water because of ‘hydrographic rounding’ and may be unrecoverable, but that the VBES data appear approximately unbiased and may be used to construct products compatible with modern surveys. Estimates of uncertainty for a surface model generated from the archive data are then constructed, taking into account measurement, interpolation, and hydrographic uncertainty (addressing the problems of unobserved areas and surface reconstruction stability). Finally, the paper addresses the generality of the method, and its implications for the community’s duty to convey our uncertainty to the end user.
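
    The depth-dependent error estimation described above can be illustrated with a minimal sketch in Python: archival soundings are compared against co-located reference depths from the modern MBES surface, and bias and spread are summarized per depth bin. The function name, bin width, and minimum bin population below are illustrative assumptions, not the paper's actual implementation.

        import numpy as np

        def depth_binned_error(archival_depths, reference_depths, bin_width=10.0):
            """Summarize archival-minus-reference differences per depth bin.

            archival_depths  : co-located depths from the archive survey (m)
            reference_depths : depths from the modern MBES surface (m)
            bin_width        : width of the depth bins (m), an illustrative choice
            Returns a list of (bin_centre, bias, std, n) tuples.
            """
            diffs = np.asarray(archival_depths) - np.asarray(reference_depths)
            ref = np.asarray(reference_depths)
            edges = np.arange(ref.min(), ref.max() + bin_width, bin_width)
            stats = []
            for lo, hi in zip(edges[:-1], edges[1:]):
                mask = (ref >= lo) & (ref < hi)
                if mask.sum() < 10:                # skip sparsely populated bins
                    continue
                stats.append(((lo + hi) / 2.0,
                              diffs[mask].mean(),        # bias as a function of depth
                              diffs[mask].std(ddof=1),   # spread as a function of depth
                              int(mask.sum())))
            return stats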

    Uncertainty Representation in Hydrographic Surveys and Products


    Parallel and Distributed Performance of a Depth Estimation Algorithm

    Expansion of dataset sizes and increasing complexity of processing algorithms have led to consideration of parallel and distributed implementations. The rationale for distributing the computational load may be to thin-provision computational resources, to accelerate the data processing rate, or to efficiently reuse already available but otherwise idle computational resources. Whatever the rationale, an efficient solution of this type brings with it questions of data distribution, job partitioning, reliability, and robustness. This paper addresses the first two of these questions in the context of a local cluster-computing environment. Using the CHRT depth estimator, it considers active and passive data distribution and their effect on data throughput, focusing mainly on the compromises required to maintain minimal communications requirements between nodes. As a metric, the analysis considers the overall computation time for a given dataset (i.e., the time lag that a user would experience), and shows that although there are significant speedups to be had by relatively simple modifications to the algorithm, there are limitations to the parallelism that can be achieved efficiently, and that a balance between inter-node parallelism (i.e., multiple nodes running in parallel) and intra-node parallelism (i.e., multiple threads within one node) is required for the most efficient utilization of available resources.
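
    As a hedged illustration of the job-partitioning trade-off discussed above (not the CHRT implementation itself), the Python sketch below splits a survey area into regular tiles and processes them with a configurable mix of worker processes (inter-node stand-ins) and per-process threads (intra-node); estimate_tile is a placeholder for the real depth estimator.

        from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor
        from itertools import product

        def estimate_tile(tile):
            """Placeholder for per-tile depth estimation (the real work)."""
            x0, y0, x1, y1 = tile
            return (tile, (x1 - x0) * (y1 - y0))   # dummy result

        def make_tiles(x_extent, y_extent, tile_size):
            """Partition the survey area into regular tiles."""
            return [(x, y, min(x + tile_size, x_extent), min(y + tile_size, y_extent))
                    for x, y in product(range(0, x_extent, tile_size),
                                        range(0, y_extent, tile_size))]

        def run_node(tiles, n_threads):
            """Intra-node parallelism: multiple threads within one worker."""
            with ThreadPoolExecutor(max_workers=n_threads) as pool:
                return list(pool.map(estimate_tile, tiles))

        def run_cluster(tiles, n_nodes=4, threads_per_node=2):
            """Inter-node parallelism: distribute tile chunks across workers."""
            chunks = [tiles[i::n_nodes] for i in range(n_nodes)]
            with ProcessPoolExecutor(max_workers=n_nodes) as pool:
                results = pool.map(run_node, chunks, [threads_per_node] * n_nodes)
            return [r for chunk in results for r in chunk]

        if __name__ == "__main__":
            tiles = make_tiles(1000, 1000, 100)
            print(len(run_cluster(tiles)))   # 100 tiles processed

    Varying n_nodes and threads_per_node in such a harness is one simple way to explore the balance between the two levels of parallelism that the paper discusses.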

    Multi-algorithm Swath Consistency Detection for Multibeam Echosounder Data

    It is unrealistic to expect that any single algorithm for pre-filtering Multibeam Echosounder data will be able to detect all of the “noise” in such data all of the time. This paper therefore presents a scheme for fusing the results of many pre-filtering sub-algorithms in order to form one, significantly more robust, meta-algorithm. This principle is illustrated on the problem of consistency detection in regions of sloping bathymetry. We show that the meta-algorithm is more robust, adapts dynamically to sub-algorithm performance, and is consistent with operator assessment of the data. The meta-algorithm is called the Multi-Algorithm Swath Consistency Detector.
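
    A minimal Python sketch of the fusion idea follows; the weighting scheme, learning rate, and feedback mechanism are hypothetical stand-ins, not the published detector. Each sub-algorithm flags a sounding as consistent or not, and its vote is weighted by a running estimate of its agreement with reference (e.g., operator) decisions.

        class ConsistencyFusion:
            """Fuse binary flags from several sub-algorithms into one decision.

            Weights are updated from feedback, so better-performing
            sub-algorithms gradually dominate the vote.
            """

            def __init__(self, n_algorithms, learning_rate=0.05):
                self.weights = [1.0 / n_algorithms] * n_algorithms
                self.learning_rate = learning_rate

            def fuse(self, flags):
                """flags: list of 0/1 votes (1 = inconsistent) from sub-algorithms."""
                score = sum(w * f for w, f in zip(self.weights, flags))
                return score > 0.5 * sum(self.weights)   # weighted majority vote

            def update(self, flags, truth):
                """Reward sub-algorithms that agreed with the reference decision."""
                for i, f in enumerate(flags):
                    agreed = (f == truth)
                    self.weights[i] *= (1.0 + self.learning_rate) if agreed \
                                       else (1.0 - self.learning_rate)
                total = sum(self.weights)
                self.weights = [w / total for w in self.weights]   # re-normalize

        fusion = ConsistencyFusion(n_algorithms=3)
        print(fusion.fuse([1, 0, 1]))   # True: weighted majority flags inconsistency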

    Huddl: the Hydrographic Universal Data Description Language

    Since many of the attempts to introduce a universal hydrographic data format have failed or have been only partially successful, a different approach is proposed. Our solution is the Hydrographic Universal Data Description Language (HUDDL), a descriptive XML-based language that permits the creation of a standardized description of (past, present, and future) data formats, and allows for applications like HUDDLER, a compiler that automatically creates drivers for data access and manipulation. HUDDL also represents a powerful solution for archiving data along with their structural description, as well as for cataloguing existing format specifications and their version control. HUDDL is intended to be an open, community-led initiative to simplify the issues involved in hydrographic data access.
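
    To make the idea concrete, here is a hedged Python sketch of what a HUDDL-style description might look like and how an application could read it. The element and attribute names are illustrative guesses, not the actual HUDDL schema.

        import xml.etree.ElementTree as ET

        # Hypothetical, simplified description of one record of a binary format.
        # Element and attribute names are illustrative only.
        DESCRIPTION = """
        <format name="ExampleSonarFormat" version="1.2">
          <record name="DepthDatagram">
            <field name="timestamp" type="uint32"/>
            <field name="beam_count" type="uint16"/>
            <field name="depth" type="float32" count="beam_count"/>
          </record>
        </format>
        """

        def load_record_layouts(xml_text):
            """Turn the XML description into a simple in-memory layout table."""
            root = ET.fromstring(xml_text)
            layouts = {}
            for record in root.findall("record"):
                fields = [(f.get("name"), f.get("type"), f.get("count"))
                          for f in record.findall("field")]
                layouts[record.get("name")] = fields
            return layouts

        print(load_record_layouts(DESCRIPTION))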

    Design and Implementation of an Extensible Variable Resolution Bathymetric Estimator

    For grid-based bathymetric estimation techniques, determining the right resolution at which to work is essential. Appropriate grid resolution can be related, roughly, to data density and thence to sonar characteristics, survey methodology, and depth. It is therefore variable in almost all survey scenarios, and methods of addressing this problem can have enormous impact on the correctness and efficiency of computational schemes of this kind. This paper describes the design and implementation of a bathymetric depth estimation algorithm that attempts to address this problem by combining the computational efficiency of locally regular grids with piecewise-variable estimation resolution to provide a single logical data structure and associated algorithms that can adjust to local data conditions, change resolution where required to best support the data, and operate over essentially arbitrarily large areas as a single unit. The algorithm, which is in part a development of CUBE, is modular and extensible, and is structured as a client-server application to support different implementation modalities. The algorithm is called “CUBE with Hierarchical Resolution Techniques”, or CHRT.
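
    The link between data density and estimation resolution can be illustrated with a small Python sketch; the allowed spacings and the soundings-per-node threshold are assumptions for illustration, not CHRT's actual refinement criteria.

        import math

        def choose_resolution(sounding_count, tile_area_m2,
                              min_soundings_per_node=5,
                              allowed=(0.5, 1.0, 2.0, 4.0, 8.0, 16.0)):
            """Pick the finest allowed grid spacing (m) that still leaves, on
            average, `min_soundings_per_node` soundings per estimation node.

            The allowed spacings and the per-node threshold are illustrative.
            """
            density = sounding_count / tile_area_m2          # soundings per m^2
            if density <= 0:
                return allowed[-1]                           # no data: coarsest grid
            ideal = math.sqrt(min_soundings_per_node / density)
            for spacing in allowed:
                if spacing >= ideal:
                    return spacing
            return allowed[-1]

        # A densely covered 100 m x 100 m tile supports a fine grid, while a
        # sparsely covered tile falls back to a coarse one.
        print(choose_resolution(40_000, 100 * 100))   # -> 2.0 m
        print(choose_resolution(200, 100 * 100))      # -> 16.0 m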

    HUDDL for description and archive of hydrographic binary data

    Many of the attempts to introduce a universal hydrographic binary data format have failed or have been only partially successful. In essence, this is because such formats either have to simplify the data to such an extent that they only support the lowest common subset of all the formats covered, or they attempt to be a superset of all formats and quickly become cumbersome. Neither choice works well in practice. This paper presents a different approach: a standardized description of (past, present, and future) data formats using the Hydrographic Universal Data Description Language (HUDDL), a descriptive language implemented using the Extensible Markup Language (XML). That is, XML is used to provide a structural and physical description of a data format, rather than the content of a particular file. Done correctly, this opens the possibility of automatically generating both multi-language data parsers and documentation for format specification based on their HUDDL descriptions, as well as providing easy version control of them. This solution also provides a powerful approach for archiving a structural description of data along with the data, so that binary data will be easy to access in the future. Intending to provide a relatively low-effort solution to index the wide range of existing formats, we suggest the creation of a catalogue of format descriptions, each of them capturing the logical and physical specifications for a given data format (with its subsequent upgrades). A C/C++ parser code generator is used as an example prototype of one of the possible advantages of the adoption of such a hydrographic data format catalogue.
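
    A hedged sketch of the parser-generation idea, in Python rather than the C/C++ generator mentioned above, with an invented type table: field types taken from a format description are mapped to struct codes and used to decode a binary record.

        import struct

        # Hypothetical mapping from description-level types to struct codes.
        TYPE_CODES = {"uint16": "H", "uint32": "I", "int32": "i",
                      "float32": "f", "float64": "d"}

        def build_unpacker(fields, byte_order="<"):
            """Compile a list of (name, type) fields into an unpack function."""
            fmt = byte_order + "".join(TYPE_CODES[t] for _, t in fields)
            names = [n for n, _ in fields]
            size = struct.calcsize(fmt)

            def unpack(buffer, offset=0):
                values = struct.unpack_from(fmt, buffer, offset)
                return dict(zip(names, values)), offset + size

            return unpack

        # Example: decode one fixed-size record whose layout was read from a
        # format description (field names here are illustrative).
        fields = [("timestamp", "uint32"), ("beam_count", "uint16"), ("depth", "float32")]
        unpack_record = build_unpacker(fields)
        record, _ = unpack_record(struct.pack("<IHf", 1234567890, 256, 42.5))
        print(record)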

    Traffic Analysis for the Calibration of Risk Assessment Methods

    In order to provide some measure of the uncertainty inherent in the sorts of charting data that are provided to the end-user, we have previously proposed risk models that measure the magnitude of the uncertainty for a ship operating in a particular area. Calibration of these models is essential, but the complexity of the models means that we require detailed information on the sorts of ships, traffic patterns and density within the model area to make a reliable assessment. In theory, the Automatic Identification System (AIS) should provide this information for a suitably instrumented area. We consider the problem of converting, filtering and analysing the raw AIS traffic to provide statistical characterizations of the traffic in a particular area, and illustrate the method with data from 2008-10-01 through 2008-11-30 around Norfolk, VA. We show that it is possible to automatically construct aggregate statistical characteristics of the port, resulting in distributions of transit location, termination and duration by vessel category, as well as type of traffic, physical dimensions, and intensity of activity. We also observe that although 60 days gives us sufficient data for our immediate purposes, a large proportion of it (up to 52% by message volume) must be considered dubious due to difficulties in configuration, maintenance and operation of AIS transceivers.
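
    The aggregation step can be sketched in Python as follows; the record fields and vessel categories are illustrative stand-ins for decoded AIS position and static-data messages, not the actual processing chain used in the paper.

        from collections import defaultdict
        from statistics import mean

        def summarize_transits(transits):
            """Aggregate per-category transit statistics from decoded AIS transits.

            Each transit is a dict with (illustrative) keys:
              'category'   : vessel type, e.g. 'cargo', 'tanker', 'tug'
              'duration_h' : transit duration in hours
              'length_m'   : reported vessel length in metres
            """
            by_category = defaultdict(list)
            for t in transits:
                by_category[t["category"]].append(t)
            summary = {}
            for cat, items in by_category.items():
                summary[cat] = {
                    "count": len(items),
                    "mean_duration_h": mean(t["duration_h"] for t in items),
                    "mean_length_m": mean(t["length_m"] for t in items),
                }
            return summary

        # Example with a few synthetic transits.
        print(summarize_transits([
            {"category": "cargo", "duration_h": 2.5, "length_m": 180},
            {"category": "cargo", "duration_h": 3.1, "length_m": 200},
            {"category": "tug",   "duration_h": 1.2, "length_m": 30},
        ]))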

    A Time Comparison of Computer-Assisted and Manual Bathymetric Processing

    We describe an experiment designed to determine the time required to process Multibeam Echosounder (MBES) data using the CUBE (Combined Uncertainty and Bathymetry Estimator) [Calder & Mayer, 2003; Calder, 2003] and Navigation Surface [Smith et al., 2002; Smith, 2003] algorithms. We collected data for a small (22.3×10⁶ soundings) survey in Valdez Narrows, Alaska, and monitored person-hours expended on processing for a traditional MBES processing stream and the proposed computer-assisted method operating on identical data. The analysis shows that the vast majority of time expended in a traditional processing stream is in subjective hand-editing of data, followed by line planning and quality control, and that the computer-assisted method is significantly faster than the traditional process through its elimination of human interaction time. The potential improvement in editing time is shown to be on the order of 25-37:1 over traditional methods.

    Development of a fusion adaptive algorithm for marine debris detection within the post-Sandy restoration framework

    Recognition of marine debris represents a difficult task due to the extreme variability of the marine environment, the possible targets, and the variable skill levels of human operators. The range of potential targets is much wider than in similar fields of research such as mine hunting, localization of unexploded ordnance, or pipeline detection. In order to address this additional complexity, an adaptive algorithm is being developed that appropriately responds to changes in the environment and context. The preliminary step is to properly geometrically and radiometrically correct the collected data. Then, the core engine manages the fusion of a set of statistically- and physically-based algorithms, working at different levels (swath, beam, snippet, and pixel) and using both predictive modeling (that is, a high-frequency acoustic backscatter model) and phenomenological (e.g., digital image processing techniques) approaches. The expected outcome is the reduction of inter-algorithmic cross-correlation and, thus, of the probability of false alarm. At this early stage, we provide a proof of concept showing outcomes from algorithms that dynamically adapt themselves to the depth and average backscatter level met in the surveyed environment, targeting marine debris (modeled as objects of about 1-m size). The project relies on a modular software library, called Matador (Marine Target Detection and Object Recognition).
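
    As a hedged Python sketch of the detection-fusion stage (the detector names, the requirement of agreement between at least two detectors, and the depth/backscatter gating are illustrative assumptions, not the Matador implementation):

        def fuse_detections(scores, depth_m, mean_backscatter_db,
                            base_threshold=0.5, min_votes=2):
            """Combine per-pixel scores from several detectors into one decision.

            scores : dict mapping detector name -> score in [0, 1], e.g.
                     {"swath": 0.7, "beam": 0.4, "snippet": 0.8, "pixel": 0.6}
            The threshold is relaxed in deeper water and over low-backscatter
            seafloor, where ~1-m targets return weaker echoes (an illustrative
            adaptation rule, not the published one).
            """
            threshold = base_threshold
            if depth_m > 30.0:
                threshold -= 0.1
            if mean_backscatter_db < -30.0:
                threshold -= 0.05
            votes = sum(1 for s in scores.values() if s >= threshold)
            # Requiring agreement between detectors suppresses single-algorithm
            # false alarms (the cross-correlation reduction mentioned above).
            return votes >= min_votes

        print(fuse_detections({"swath": 0.7, "beam": 0.4, "snippet": 0.8, "pixel": 0.6},
                              depth_m=45.0, mean_backscatter_db=-35.0))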